
Proc SPIE Int Soc Opt Eng. Author manuscript; available in PMC 2017 May 30. Published in final edited form as: Proc SPIE Int Soc Opt Eng. 2017 Feb 11;10135:101352K. Published online 2017 Mar 3. doi: 10.1117/12.2256028. PMCID: PMC5449120. NIHMSID: NIHMS858945. PMID: 28572694.

C-arm Positioning Using Virtual Fluoroscopy for Image-Guided Surgery

T. De Silva,1 J. Punnoose,1 A. Uneri,1 J. Goerres,1 M. Jacobson,1 M. D. Ketcha,1 A. Manbachi,1 S. Vogt,3 G. Kleinszig,3 A. J. Khanna,4 J.-P. Wolinsky,5 G. Osgood,4 and J. H. Siewerdsen1,2,5

1 Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD
2 Russell H. Morgan Department of Radiology, Johns Hopkins University, Baltimore MD
3 Siemens Healthineers, Erlangen Germany
4 Orthopaedic Surgery, Johns Hopkins University, Baltimore MD
5 Department of Neurosurgery, Johns Hopkins University, Baltimore MD

Abstract

Introduction

Fluoroscopically guided procedures often involve repeated acquisitions for C-arm positioning at the cost of radiation exposure and time in the operating room. A virtual fluoroscopy system is reported with the potential to reduce dose and time spent in C-arm positioning, utilizing three key advances: robust 3D-2D registration to a preoperative CT; real-time forward projection on GPU; and a motorized mobile C-arm with encoder feedback on C-arm orientation.

Method

Geometric calibration of the C-arm was performed offline in the two rotational directions (orbital α, angular β). Patient registration was performed using image-based 3D-2D registration with an initially acquired radiograph of the patient. This approach to patient registration eliminated the need for external tracking devices in the operating room, allowing virtual fluoroscopy with systems commonly available in fluoroscopically guided procedures and within standard surgical workflow. Geometric accuracy was evaluated in terms of projection distance error (PDE) in anatomical fiducials. A pilot study was conducted to evaluate the utility of virtual fluoroscopy in aiding C-arm positioning in image-guided surgery, assessing potential improvements in time, dose, and agreement between the virtual and desired views.

Results

The overall geometric accuracy of DRRs in comparison to the actual radiographs at various C-arm positions was PDE (mean ± std) = 1.6 ± 1.1 mm. The conventional approach required on average 8.0 ± 4.5 radiographs of “fluoro hunting” to obtain the desired view. Positioning accuracy improved from 2.6° ± 2.3° (in α) and 4.1° ± 5.1° (in β) with the conventional approach to 1.5° ± 1.3° and 1.8° ± 1.7°, respectively, with the virtual fluoroscopy approach.

Conclusion

Virtual fluoroscopy could improve the accuracy of C-arm positioning and save time and radiation dose in the operating room. Such a system could be valuable for training fluoroscopy technicians as well as for intraoperative use in fluoroscopically guided procedures.

1. INTRODUCTION

Fluoroscopy is a common imaging modality for guiding surgical procedures. Many orthopaedic, neurosurgical, and orthopaedic trauma procedures require radiographic visualization of anatomy-specific views together with surgical instrumentation and implants. In obtaining the desired view, repeated C-arm fluoroscopy images are often acquired as radiology technicians follow a trial-and-error approach of ‘fluoro hunting’, at the expense of time and radiation exposure to the patient as well as personnel. To save time and radiation dose in the operating room, the methods proposed in this work generate virtual fluoroscopy to assist the surgeon and/or radiology technician in C-arm positioning, using a preoperative CT image that is commonly available for patients undergoing surgery.

Previously proposed fluoroscopy simulation methods relied upon external tracking systems to align the patient position relative to the C-arm imaging coordinate system and were primarily intended for surgical training [1], [2]. For the task of C-arm positioning, the use and adaptation of external tracking systems are challenging due to line-of-sight requirements and the addition of cumbersome hardware in the operating room. As a result, C-arm positioning is performed without any assistance from fluoroscopy simulation in the current standard of practice in image-guided surgery. In the solution proposed below, we perform patient registration using image-based 3D-2D registration with an acquired radiograph of the patient, obviating the need for tracking hardware. Solutions have also been proposed to align the patient position via 3D-2D registration in the context of image-guided radiation therapy [3]. Achieving accurate 3D-2D registration can be challenging in the presence of surgical instrumentation in image-guided surgery applications. We utilize robust 3D-2D registration approaches [4]–[7] previously developed and validated in clinical images [8], [9]. Modern C-arms have the capability to track and record the motion of the C-arm gantry during operation. Using the encoded positions of the C-arm and an entirely image-based method for patient registration, virtual fluoroscopy can be generated in real time during the procedure. Radiographs acquired for localization at the beginning of the procedure can be used to perform registration without imposing a burden on the surgical workflow. The following sections present a virtual fluoroscopy system based on fast 3D-2D registration, evaluate its geometric accuracy, and assess its potential utility in improving C-arm positioning in a pilot study conducted using a realistic pelvis phantom.

2. METHODS

2.1 Digitally reconstructed radiograph (DRR) generation

Using a preoperative CT image of the patient, virtual fluoroscopy is generated by computing a simulated x-ray image referred to as a digitally reconstructed radiograph (DRR). To generate DRRs, CT data in Hounsfield units (HU) are first converted to linear attenuation coefficients (mm−1) using the attenuation coefficient of water (μ_water), and projections are computed with a ray-tracing method using tri-linear interpolation, implemented according to [10]. A GPU-based parallel implementation in C++/CUDA was devised for fast, real-time computation of DRRs. Generating accurate DRRs resembling a radiograph at a given C-arm position depends on estimating the relative position between the C-arm and the patient in the world coordinate system (w) of the operating room. To achieve this, the motion of the C-arm is measured during its manipulation using mechanical encoders attached to certain degrees-of-freedom (DoF) of the C-arm. In this work, we measured the two major rotational DoFs: (1) rotation within the plane of the C-arm gantry (henceforth referred to as ‘orbital’ and denoted α), and (2) rotation perpendicular to the plane of the C-arm gantry (henceforth referred to as ‘angular’ and denoted β). To calculate the relative transform (T_pd) between the patient (p) and the C-arm detector (d), geometric calibration of the C-arm and patient registration are necessary.
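For concreteness, the following is a minimal CPU sketch of DRR generation in Python. The implementation reported here is a GPU-parallel C++/CUDA ray tracer; all function and parameter names below (drr, sdd, pix_mm, vox_mm) are illustrative assumptions rather than the authors' API, and an isotropic voxel size with the patient frame origin at the volume corner is assumed.

```python
import numpy as np
from scipy.ndimage import map_coordinates

MU_WATER = 0.02  # nominal linear attenuation coefficient of water (mm^-1)

def hu_to_mu(ct_hu):
    """Convert CT values in Hounsfield units to linear attenuation (mm^-1)."""
    return MU_WATER * (ct_hu / 1000.0 + 1.0)

def drr(mu, T_pd, sdd, det_shape, pix_mm, vox_mm, n_samples=512):
    """Ray-cast a DRR from an attenuation volume (hypothetical sketch).
    mu: (Z, Y, X) volume (mm^-1); T_pd: 4x4 patient-to-detector transform;
    sdd: source-to-detector distance (mm); det_shape: (rows, cols);
    pix_mm: detector pixel pitch (mm); vox_mm: isotropic voxel size (mm)."""
    rows, cols = det_shape
    # Source and detector-pixel positions in the detector frame
    # (source at (0, 0, -sdd), detector plane at z = 0).
    src = np.array([0.0, 0.0, -sdd])
    u = (np.arange(cols) - cols / 2.0) * pix_mm
    v = (np.arange(rows) - rows / 2.0) * pix_mm
    uu, vv = np.meshgrid(u, v)
    pix = np.stack([uu, vv, np.zeros_like(uu)], axis=-1)         # (rows, cols, 3)
    # Sample each source-to-pixel ray at n_samples evenly spaced points.
    t = np.linspace(0.0, 1.0, n_samples)
    pts = src + t[:, None, None, None] * (pix - src)             # (n, rows, cols, 3)
    # Map detector-frame sample points into patient (voxel) coordinates.
    T_dp = np.linalg.inv(T_pd)
    pts = pts @ T_dp[:3, :3].T + T_dp[:3, 3]
    ijk = (pts / vox_mm)[..., ::-1]                              # (x,y,z) -> (z,y,x) index order
    # Trilinear interpolation of mu along the rays (order=1 = trilinear).
    vals = map_coordinates(mu, ijk.reshape(-1, 3).T, order=1, cval=0.0)
    vals = vals.reshape(n_samples, rows, cols)
    step = np.linalg.norm(pix - src, axis=-1) / (n_samples - 1)  # per-ray step (mm)
    line_integral = vals.sum(axis=0) * step                      # approximates the integral of mu dl
    return np.exp(-line_integral)                                # transmitted intensity
```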

2.2 Geometric calibration of the C-arm

The geometric calibration of the C-arm provides the position of the detector relative to the world coordinate system (Tdw) for given angle encoder values. While a number of geometric calibration methods have been reported, we performed the geometric calibration in the two-dimensional space of orbital (α) and angular (β) values via 3D-2D registration. Radiographs acquired of an anthropomorphic phantom at various (α, β) positions of the C-arm were registered by optimizing the gradient orientation (GO) similarity metric with a preoperative CT image to calculate the extrinsics of the camera geometry [11]. Radiographs were acquired at every 2 degree intervals spanning a range of −180 < α < 180 and 0 < β < 40.
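The paper does not detail how calibrated poses are queried at run time between the 2° grid points; the sketch below illustrates one plausible scheme (all class and parameter names are hypothetical), assuming the calibration yields a detector-to-world transform T_dw at each (α, β) grid point and that intermediate encoder readings are handled by bilinear interpolation, with rotations blended in rotation-vector space (adequate at this small grid spacing).

```python
import numpy as np
from scipy.spatial.transform import Rotation

class CArmCalibration:
    def __init__(self, alphas, betas, poses):
        """alphas: (A,) and betas: (B,) sorted grid values in degrees;
        poses: (A, B, 4, 4) calibrated detector-to-world transforms T_dw."""
        self.alphas, self.betas, self.poses = alphas, betas, poses

    def pose(self, alpha, beta):
        """Bilinearly interpolate T_dw at arbitrary encoder readings."""
        # Locate the grid cell containing (alpha, beta).
        i = np.clip(np.searchsorted(self.alphas, alpha) - 1, 0, len(self.alphas) - 2)
        j = np.clip(np.searchsorted(self.betas, beta) - 1, 0, len(self.betas) - 2)
        fa = (alpha - self.alphas[i]) / (self.alphas[i + 1] - self.alphas[i])
        fb = (beta - self.betas[j]) / (self.betas[j + 1] - self.betas[j])
        w = np.array([(1 - fa) * (1 - fb), fa * (1 - fb), (1 - fa) * fb, fa * fb])
        corners = [self.poses[i, j], self.poses[i + 1, j],
                   self.poses[i, j + 1], self.poses[i + 1, j + 1]]
        # Blend rotations in rotation-vector space, translations linearly.
        rotvecs = np.stack([Rotation.from_matrix(T[:3, :3]).as_rotvec() for T in corners])
        trans = np.stack([T[:3, 3] for T in corners])
        T = np.eye(4)
        T[:3, :3] = Rotation.from_rotvec(w @ rotvecs).as_matrix()
        T[:3, 3] = w @ trans
        return T
```

Given T_dw from this calibration and the patient-to-world transform T_pw from patient registration (Section 2.3), the transform required for DRR generation follows as T_pd = inv(T_dw) · T_pw.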

2.3 Patient registration

At the beginning of the procedure, the position of the patient relative to the world coordinate system (T_pw) needs to be calculated. We propose to achieve this step also via 3D-2D registration, using an initially acquired radiographic view and the preoperative CT image of the patient. While the registration can be performed using as little as a single radiograph, multiple radiographs could improve the accuracy of the patient position estimate. Our registration framework has been shown to be robust in realistic scenarios encountered in clinical images, such as content mismatch due to surgical instrumentation and implants. The gradient orientation (GO) similarity metric was optimized using the multi-start covariance-matrix-adaptation evolution strategy (CMA-ES) in a 6-DoF search space [4]. GO similarity has demonstrated robustness against content mismatch, while the multi-start CMA-ES search strategy mitigates susceptibility to local optima. By combining geometric calibration and patient registration, the relative transformation between the patient and the C-arm detector, T_pd, can be computed to accurately generate DRRs.
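As a hedged sketch of this registration step, the code below pairs a simplified variant of the GO metric (the exact formulation is given in [4]) with multi-start CMA-ES restarts via the pycma package; render(x) stands in for DRR generation at candidate pose x (e.g., the drr() sketch above), and all names are illustrative assumptions.

```python
import numpy as np
import cma  # pycma package: pip install cma

def go_similarity(fixed, moving, grad_thresh=0.01):
    """Simplified gradient-orientation similarity: mean |cos| of the angle
    between image gradients, over pixels where both gradients are strong."""
    g0f, g1f = np.gradient(fixed)
    g0m, g1m = np.gradient(moving)
    mag_f = np.hypot(g0f, g1f)
    mag_m = np.hypot(g0m, g1m)
    mask = (mag_f > grad_thresh) & (mag_m > grad_thresh)
    cos_d = (g0f * g0m + g1f * g1m) / (mag_f * mag_m + 1e-12)
    return float(np.abs(cos_d[mask]).mean())

def register(radiograph, render, x0, n_starts=5, sigma0=5.0, seed=0):
    """Multi-start CMA-ES over 6-DoF pose parameters x = (tx, ty, tz, rx, ry, rz).
    render(x) must return the DRR for candidate pose x.
    Returns the pose parameters with the best similarity over all restarts."""
    def cost(x):
        return -go_similarity(radiograph, render(x))  # CMA-ES minimizes
    rng = np.random.default_rng(seed)
    best_x, best_f = np.asarray(x0, dtype=float), np.inf
    for _ in range(n_starts):
        start = np.asarray(x0) + rng.normal(scale=sigma0, size=6)  # perturbed restart
        x, es = cma.fmin2(cost, start, sigma0)
        if es.result.fbest < best_f:
            best_x, best_f = x, es.result.fbest
    return best_x
```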

2.4 Experiments

Experiments were performed using a mobile C-arm (Cios Alpha, Siemens Healthcare, Erlangen, Germany) to assess the utility of simulated fluoroscopy in comparison to the conventional ‘fluoro hunting’ approach. Patient registration was performed using an initially acquired posterior-anterior (PA) radiograph of an anthropomorphic thorax phantom. The C-arm was then positioned by varying the orbital and angular rotations in 10° increments within the range −40° < α < 40° and 0° < β < 40°. The radiograph acquired at each C-arm position was compared with the corresponding virtual fluoroscopy image to evaluate geometric accuracy, quantified by calculating the projection distance error (PDE) using manually identified fiducials in the radiographs and the CT image.
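For concreteness, the following sketch (hypothetical names) computes the PDE by projecting each 3D fiducial through the estimated geometry onto the detector plane and measuring its in-plane distance to the manually identified 2D point.

```python
import numpy as np

def pde(fiducials_3d, fiducials_2d, T_pd, sdd):
    """Projection distance error per fiducial (mm).
    fiducials_3d: (N, 3) points identified in the CT, patient coordinates (mm).
    fiducials_2d: (N, 2) corresponding points identified in the radiograph,
                  expressed in detector-plane coordinates (mm).
    T_pd: 4x4 patient-to-detector transform; sdd: source-to-detector distance."""
    p = fiducials_3d @ T_pd[:3, :3].T + T_pd[:3, 3]  # map into detector frame
    # Perspective projection: x-ray source at (0, 0, -sdd), detector plane z = 0.
    scale = sdd / (sdd + p[:, 2])
    projected = p[:, :2] * scale[:, None]
    return np.linalg.norm(projected - fiducials_2d, axis=1)
```

The reported summary statistics would then correspond to the mean ± std of these per-fiducial distances pooled over C-arm positions.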

A pilot study was designed using an anthropomorphic abdominal phantom to evaluate the utility of virtual fluoroscopy for C-arm positioning during image-guided surgery (Figure 2A). Five clinically relevant radiographic views (Figure 2B–F) used in pelvic surgery were selected as target views for the C-arm operator. For each case, the C-arm was positioned to obtain the desired view using both the conventional “fluoro hunting” and virtual fluoroscopy approaches. Four C-arm operators (engineers trained on pelvis anatomy and the pertinent radiographic views) performed the experiment on different days to minimize bias associated with memory. The order of the conventional and simulation approaches was randomized among users to minimize bias due to learning effects. The number of radiographs required to obtain a given view and the final view achieved by the operator were recorded for each trial. The accuracy of each obtained view was quantified via angle positioning errors in the orbital and angular directions relative to the desired ground-truth view displayed to the operator. Normalized cross correlation (NCC) between the obtained and ground-truth images was also computed as an image-similarity figure of merit.
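The NCC figure of merit here is the standard zero-mean normalized cross correlation; a minimal sketch:

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross correlation between two equal-size images;
    1.0 indicates a perfect (linear) intensity match."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())
```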

Figure 2

(A) Experimental setup used to evaluate the utility of virtual fluoroscopy. (B–F) Anatomy-specific desired views of the pelvis shown to the C-arm operator as targets.

3. RESULTS

3.1 Geometric accuracy assessment

The accuracy of the geometric calibration of the C-arm for the orbital and angular rotations, quantified using manually identified corresponding pairs of anatomical locations, was PDE (mean ± std) = 1.1 ± 0.9 mm. The overall accuracy of the generated DRRs in comparison to the actual radiographs at different orbital and angular C-arm positions was PDE = 1.6 ± 1.1 mm. Figure 3A shows PDE distributions separately for variations in angular position at fixed orbital positions and for variations in orbital position at fixed angular positions. Despite the non-isocentric nature of C-arm motion in the orbital direction, the image-based geometric calibration and patient registration achieved comparable accuracy in both directions. Figure 3B qualitatively illustrates the alignment of a radiograph and the corresponding DRR. Such similarity between the image pair supports the use of virtual fluoroscopy as a guidance tool for accurate C-arm positioning.

Figure 3

A: PDE distributions at fixed orbital positions across various angular values, and at fixed angular positions across various orbital values. B: Comparison of a radiograph (left) and the corresponding DRR (right) showing the similarity of the simulated and actual images. Canny edges from the DRR are shown in yellow on the actual radiograph.

3.2 Utility assessment

Compared to the single radiograph required to position the C-arm with the aid of fluoroscopy simulation, the conventional approach required on average 8.0 ± 4.5 radiographs to obtain the desired view. Among the different views, the number of radiographs required ranged from 5.0 ± 0.8 (for the PA view) to 11.7 ± 6.8 (for the Judet view). Figure 4A compares the distributions of angle errors in C-arm positioning for the conventional and simulation approaches. Positioning accuracy in the orbital direction improved from 2.6° ± 2.3° (mean ± std) with the conventional approach to 1.5° ± 1.3° when using fluoroscopy simulation, whereas angular accuracy improved from 4.1° ± 5.1° to 1.8° ± 1.7°. As illustrated in Figure 4B, the similarity between the obtained and desired views as measured by NCC improved from 0.76 ± 0.19 with the conventional approach to 0.85 ± 0.14 with fluoroscopy simulation.

Figure 4

A: Orbital and angular error distributions for the conventional and simulation approaches. B: NCC distributions calculated between obtained and desired views for the two methods.

Figure 5 qualitatively demonstrates the improvement with virtual fluoroscopy: Canny edges extracted from the views obtained by four different operators are overlaid on the desired view for the conventional (top row) and simulation (bottom row) approaches. Under fluoroscopy simulation, all views except the lateral view showed decreased variability among operators, indicating the potential of virtual fluoroscopy to obtain the desired view more consistently. The high variability in the lateral view under virtual fluoroscopy may reflect its less definitive anatomic description, which was satisfied over a broad range of angles.

Figure 5

Variability among operators in obtaining the five target views using the conventional and simulation methods. Canny edges extracted from the image obtained by each operator are overlaid in a separate color on the ground-truth image. Note the dispersion of edges with the conventional approach compared to the more reproducible and accurate edges with simulation (virtual fluoroscopy).

4. CONCLUSIONS

This work demonstrated accurate methods for generating virtual fluoroscopy using image-based registration for patient registration and geometric calibration of the C-arm. The pilot study indicated that the system could decrease the number of views required to position the C-arm during surgery and improve the geometric accuracy of positioning the C-arm to obtain an anatomy-specific view. With this approach, the patient registration can be updated using each radiograph acquired during the procedure to compensate for any motion during surgery. This approach to virtual fluoroscopy adds no external hardware (e.g., trackers) or other equipment to the operating room and thus has the potential to translate to clinical use with systems already within the surgical arsenal and within standard OR workflow.

Figure 1

DRR generation using angle encoder readings (α and β) from the C-arm. The transformation of the patient relative to the C-arm detector, T_pd, is computed using the initial geometric calibration and patient registration steps.

Acknowledgments

This work was supported by NIH Grant No. R01-EB-017226 and academic-industry collaboration with Siemens Healthcare (XP Division, Erlangen Germany). The authors extend their thanks to Jessica Wood, Bonnie Grantland, Lauryn Hancock, Aris Thompson, Julia Stupi, and Shewaferaw Lema (Department of Radiology) for valuable discussion and participation in the user study.

References

1. Gong RH, Jenkins B, Sze RW, Yaniv Z. A cost effective and high fidelity fluoroscopy simulator using the Image-Guided Surgery Toolkit (IGSTK). Med Imaging 2014: Image-Guided Proced Robot Interv Model. 2014;9036:11.
2. Bott OJ, Dresing K, Wagner M, Raab B-W, Teistler M. Informatics in radiology: use of a C-arm fluoroscopy simulator to support training in intraoperative radiography. Radiographics. 2011;31:E64–E74.
3. Munbodh R, Chen Z, Jaffray DA, Moseley DJ, Knisely JP, Duncan JS. Automated 2D-3D registration of portal images and CT data using line-segment enhancement. Med Phys. 2008;35(10):4352–4361.
4. De Silva T, Uneri A, Ketcha MD, Reaungamornrat S, Kleinszig G, Vogt S, Aygun N, Lo S-F, Wolinsky J-P, Siewerdsen JH. 3D-2D image registration for target localization in spine surgery: investigation of similarity metrics providing robustness to content mismatch. Phys Med Biol. 2016 Apr;61(8):3009–3025.
5. Ketcha MD, De Silva T, Uneri A, Kleinszig G, Vogt S, Wolinsky J-P, Siewerdsen JH. Automatic masking for robust 3D-2D image registration in image-guided spine surgery. SPIE Medical Imaging. 2016.
6. Uneri A, De Silva T, Stayman JW, Kleinszig G, Vogt S, Khanna AJ, Gokaslan ZL, Wolinsky J-P, Siewerdsen JH. Known-component 3D-2D registration for quality assurance of spine surgery pedicle screw placement. Phys Med Biol. 2015 Oct;60(20):8007–8024.
7. Uneri A, Goerres J, De Silva T, Jacobson M, Ketcha M, Reaungamornrat S, Kleinszig G, Vogt S, Khanna AJ, Wolinsky J-P, Siewerdsen JH. Deformable 3D-2D registration of known components for image guidance in spine surgery. Medical Image Computing and Computer-Assisted Intervention (MICCAI). 2016 (in press).
8. Lo S-FL, Otake Y, Puvanesarajah V, Wang AS, Uneri A, De Silva T, Vogt S, Kleinszig G, Elder BD, Goodwin CR, Kosztowski TA, Liauw JA, Groves M, Bydon A, Sciubba DM, Witham TF, Wolinsky J-P, Aygun N, Gokaslan ZL, Siewerdsen JH. Automatic localization of target vertebrae in spine surgery: clinical evaluation of the LevelCheck registration algorithm. Spine (Phila Pa 1976). 2015;40(8):E476–E483.
9. De Silva T, Lo S-FL, Aygun N, Aghion DM, Boah A, Petteys R, Uneri A, Ketcha MD, Yi T, Vogt S, Kleinszig G, Wei W, Weiten M, Ye X, Bydon A, Sciubba DM, Witham TF, Wolinsky J-P, Siewerdsen JH. Utility of the LevelCheck algorithm for decision support in vertebral localization. Spine (Phila Pa 1976). 2016;41(20):E1249–E1256.
10. Cabral B, Cam N, Foran J. Accelerated volume rendering and tomographic reconstruction using texture mapping hardware. Proc 1994 Symp Vol Vis. 1994:91–98.
11. Ouadah S, Stayman JW, Gang GJ, Ehtiati T, Siewerdsen JH. Self-calibration of cone-beam CT geometry using 3D-2D image registration. Phys Med Biol. 2016;61(7):2613.

